6 research outputs found

    Energy Efficient Tapered Data Networks for Big Data Processing in IP/WDM Networks

    Classically, the data produced by Big Data applications is transferred through the access and core networks to be processed in data centers, where the resulting data is stored. In this work we investigate improving the energy efficiency of transporting Big Data by processing the data in processing nodes of limited processing and storage capacity along its journey through the core network to the data center. The amount of data transported over the core network is significantly reduced each time the data is processed; therefore, we refer to such a network as an Energy Efficient Tapered Data Network. The results of a Mixed Integer Linear Programming (MILP) model, developed to optimize the processing of Big Data in Energy Efficient Tapered Data Networks, show significant reductions in network power consumption of up to 76%.
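The tapering idea can be illustrated with a toy model. This is a sketch only: the per-node capacities, the 10% reduction factor, and the `taper_along_path` helper below are assumptions for the example, not the paper's MILP formulation.

```python
# Toy model of "tapering" a Big Data flow along a core-network path:
# each intermediate node processes as much data as its capacity allows,
# and processed data shrinks to a fraction of its original size.

def taper_along_path(volume_gb, node_capacities_gb, reduction_factor=0.1):
    """Return the Gb transported on each hop toward the data center,
    processing data at intermediate nodes of limited capacity."""
    transported = []
    for cap in node_capacities_gb:
        transported.append(volume_gb)          # traffic on the hop into this node
        processed = min(volume_gb, cap)        # limited processing capacity
        volume_gb = (volume_gb - processed) + processed * reduction_factor
    transported.append(volume_gb)              # final hop into the data center
    return transported

hops = taper_along_path(100.0, [40.0, 40.0])
print(hops)  # traffic shrinks hop by hop: [100.0, 64.0, 28.0]
```

Because less data crosses each successive hop, a traffic-proportional power model yields lower network power the earlier the processing happens.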

    Greening Big Data Networks: Volume Impact

    The tremendous volumes of data generated by big data applications are starting to overwhelm data centers and networks. Traditional research efforts have focused on how to process these vast volumes of data inside datacenters. Nevertheless, little attention has been paid to the increase in power consumption that results from transferring these gigantic volumes of data from the source to the destination (datacenters). An efficient approach to this challenge is to progressively process large volumes of data as close to the source as possible and transport only the reduced volume of extracted knowledge to the destination. In this article, we examine the impact of processing different big data volumes on network power consumption in a progressive manner from source to datacenters. A noteworthy decrease in the amount of data transferred is thus achieved, which results in a substantial reduction in network power consumption. We consider different volumes of big data chunks and introduce a Mixed Integer Linear Programming (MILP) model to optimize the processing locations of these volumes of data and the locations of two datacenters. The results show that serving big data volumes drawn from a uniform distribution yields higher power savings than chunks of fixed size: we obtain average network power savings of 57%, 48%, and 35% for volumes of 10-220 Gb (uniform), 110 Gb, and 50 Gb per chunk, respectively, compared to the conventional approach where all chunks are processed inside datacenters only.
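As a rough illustration of where the savings come from, the sketch below takes network power as proportional to the Gb carried per hop and compares shipping chunks whole against processing them early on the path. The hop counts, the 10% reduction factor, and the `power_saving` helper are assumed values for the example, not results of the paper's MILP.

```python
# Traffic-proportional power model: a chunk processed at an early hop travels
# the remaining hops at a fraction of its original volume.

def power_saving(chunk_volumes_gb, hops_to_dc=4, processing_hop=1, reduction=0.1):
    """Fractional saving in traffic-proportional network power when each chunk
    is reduced at an intermediate node instead of shipped whole to the datacenter."""
    conventional = sum(v * hops_to_dc for v in chunk_volumes_gb)
    progressive = sum(v * processing_hop                              # raw data to the node
                      + v * reduction * (hops_to_dc - processing_hop) # knowledge onward
                      for v in chunk_volumes_gb)
    return 1 - progressive / conventional

print(f"{power_saving([50.0] * 20):.1%}")  # 67.5% under these assumed hop counts
```

In this simplified model the saving depends only on where processing happens and how much the data shrinks; the paper's MILP additionally optimizes the processing and datacenter locations under capacity constraints, which is what makes the chunk-size distribution matter.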

    Energy Efficient Resource Provisioning with VM Migration Heuristic for Disaggregated Server Design

    This article introduces an energy efficient heuristic that performs resource provisioning and Virtual Machine (VM) migration under the Disaggregated Server (DS) scheme. The DS is a promising paradigm for future data centers in which servers' components are disaggregated at the hardware unit level and resources of the same type are combined into type-specific pools, such as processing pools, memory pools, and IO pools. We examined 1000 VM requests with various processing, memory, and IO requirements. Requests have exponentially distributed inter-arrival times and uniformly distributed service durations. Resources occupied by a VM are released when the VM finishes its service duration. The heuristic optimises VM allocation and dynamically migrates existing VMs to occupy newly released, more energy efficient resources. We assess the energy efficiency of the heuristic under increasing service durations. The results of the numerical simulation indicate that power savings can reach up to 55% compared to our previous study, where VM service durations were infinite and resources were not released.
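A minimal sketch of provisioning over type-specific pools, assuming small example pool sizes, a first-fit placement policy, and the hypothetical `Pool` and `provision` names; the migration step of the paper's heuristic is not shown.

```python
# Illustrative disaggregated-server (DS) provisioning: each resource type lives
# in its own pool of hardware units, and a VM draws from every pool it needs.

class Pool:
    """A pool of identical hardware units of one resource type (CPU, memory, IO)."""
    def __init__(self, name, units, unit_capacity):
        self.name = name
        self.free = [unit_capacity] * units   # remaining capacity per unit

    def allocate(self, demand):
        """First-fit: place the demand on the first unit that can hold it."""
        for i, cap in enumerate(self.free):
            if cap >= demand:
                self.free[i] -= demand
                return i
        return None

    def release(self, unit, demand):
        self.free[unit] += demand

def provision(vm, pools):
    """Try to place a VM's per-resource demands; roll back on failure."""
    placed = []
    for res, demand in vm.items():
        unit = pools[res].allocate(demand)
        if unit is None:
            for r, u in placed:               # roll back partial placement
                pools[r].release(u, vm[r])
            return None
        placed.append((res, unit))
    return dict(placed)

pools = {"cpu": Pool("cpu", 2, 16), "mem": Pool("mem", 2, 64), "io": Pool("io", 2, 10)}
print(provision({"cpu": 4, "mem": 16, "io": 2}, pools))  # {'cpu': 0, 'mem': 0, 'io': 0}
```

Releasing a finished VM's resources back to the pools is what creates the "newly released energy efficient resources" that the paper's heuristic migrates running VMs onto.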